One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL
While reinforcement learning algorithms can learn effective policies for complex tasks, these policies are often brittle to even minor task variations, especially when variations are not explicitly provided during training. One natural approach to this problem is to train agents with manually specified variation in the training task or environment. However, this may be infeasible in practical situations, either because making perturbations is not possible, or because it is unclear how to choose suitable perturbation strategies without sacrificing performance. The key insight of this work is that learning diverse behaviors for accomplishing a task can directly lead to behavior that generalizes to varying environments, without needing to perform explicit perturbations during training. By identifying multiple solutions for the task in a single environment during training, our approach can generalize to new situations by abandoning solutions that are no longer effective and adopting those that are. We theoretically characterize a robustness set of environments that arises from our algorithm and empirically find that our diversity-driven approach can extrapolate to various changes in the environment and task.
Review for NeurIPS paper: One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL
Summary and Contributions: The paper proposes an approach that learns diverse behaviors so that policies do not become overly specialized to a single task, making them more general and robust to variations of that task. The proposed method considers latent-conditioned policies and optimizes an objective that favors policies with high mutual information between the trajectory and the latent variable, subject to the constraint that those policies are \epsilon-optimal. Unlike meta-learning, training is carried out in a single environment while testing is done on variations of it. A theoretical study justifying the proposed objective is provided, together with an experimental evaluation. Strengths: The main strength of the paper is that it addresses a quite challenging scenario in which training is carried out in a single environment while testing must be done in different environments.
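The review's summary describes a constrained, diversity-driven objective: maximize return together with the mutual information between trajectories and a latent variable, while requiring each latent-conditioned policy to remain near-optimal. A rough sketch of such an objective, in notation assumed here rather than taken from the paper, is:

```latex
% Latent-conditioned policy \pi_\theta(a \mid s, z), latent z \sim p(z),
% trajectory \tau, expected return J, optimal policy \pi^{*}, tolerance \epsilon.
\max_{\theta} \;
  \mathbb{E}_{z \sim p(z)}\!\big[ J(\pi_\theta(\cdot \mid \cdot, z)) \big]
  + \alpha \, I(\tau; z)
\quad \text{s.t.} \quad
J(\pi_\theta(\cdot \mid \cdot, z)) \ge J(\pi^{*}) - \epsilon \;\; \forall z
```

Here $\alpha$ trades off diversity against return; the mutual-information term encourages distinguishable behaviors per latent $z$, while the constraint keeps every such behavior \epsilon-optimal on the training task.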